25 research outputs found

    Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

    Get PDF
    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
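    To make the learning rule concrete, the following is a minimal rate-based NumPy sketch of the local update described in this abstract. It is not the authors' spiking implementation; the rate-based simplification and all names and constants (eta, k_fb, D_out, E_fb) are assumptions made here for illustration.

```python
import numpy as np

# Minimal rate-based sketch of a FOLLOW-style local update (illustrative only;
# the paper uses heterogeneous spiking neurons). Weight changes depend solely on
# filtered presynaptic activity and the output error projected onto each
# postsynaptic neuron through fixed random feedback connections.

rng = np.random.default_rng(0)
N, D = 200, 2                            # number of neurons, output dimensions
eta, k_fb = 1e-4, 10.0                   # assumed learning rate and feedback gain

D_out = rng.normal(0.0, 1.0 / N, (D, N)) # fixed linear decoder of network activity
E_fb = rng.normal(0.0, 1.0, (N, D))      # fixed random error-feedback encoders

def follow_step(W, r_pre, x_desired):
    """One update; r_pre is the filtered presynaptic activity (shape N,)."""
    x_hat = D_out @ r_pre                # network's current output estimate
    err = x_desired - x_hat              # output error
    eps = E_fb @ err                     # error projected onto each postsynaptic neuron
    W = W + eta * np.outer(eps, r_pre)   # local outer-product weight update
    i_fb = -k_fb * eps                   # negative-gain feedback current to the network
    return W, i_fb
```

    In the scheme described above, the same projected error also drives the network during learning, which is what keeps its trajectory close to the desired dynamics while the weights converge.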

    Reservoir computing for temporal data classification using a dynamic solid electrolyte ZnO thin film transistor

    Get PDF
    The processing of sequential and temporal data is essential to computer vision and speech recognition, two of the most common applications of artificial intelligence (AI). Reservoir computing (RC) is a branch of AI that offers a highly efficient framework for processing temporal inputs at a low training cost compared to conventional recurrent neural networks (RNNs). However, despite extensive effort, two-terminal memristor-based reservoirs have until now been implemented to process sequential data by reading their conductance states only once, at the end of the entire sequence. This method reduces the dimensionality, i.e. the number of signals read from the reservoir, and thereby lowers the overall performance of reservoir systems. Higher dimensionality facilitates the separation of originally inseparable inputs by reading out a larger set of spatiotemporal features of the inputs. Moreover, memristor-based reservoirs use multiple pulse rates, fast or slow reads (immediately, or with a delay introduced after the end of the sequence), or excitatory pulses to enhance the dimensionality of the reservoir states, which adds to the complexity of the reservoir system and reduces power efficiency. In this paper, we demonstrate the first reservoir computing system based on a dynamic three-terminal solid-electrolyte ZnO/Ta2O5 thin-film transistor fabricated at less than 100°C. The inherent nonlinearity and dynamic memory of the device lead to a rich separation property of the reservoir states, resulting in, to our knowledge, the highest accuracy of 94.44% for the classification of handwritten digits using an electronic charge-based system. This improvement is attributed to an increase in the dimensionality of the reservoir achieved by reading the reservoir states after each pulse rather than at the end of the sequence. The third terminal enables a read operation in the off state, that is, when no pulse is applied at the gate terminal, via a small read pulse at the drain. This fundamentally allows multiple read operations without increasing energy consumption, which is not possible in the conventional two-terminal memristor counterpart. Further, we have also shown that the devices do not saturate even after multiple write pulses, which demonstrates the device's ability to process longer sequences.
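    The per-pulse read-out idea can be illustrated with a short, hedged sketch. The leaky tanh update below merely stands in for the transistor's nonlinear dynamic memory, and all parameters (leak, gain, lam) are assumptions rather than measured device values.

```python
import numpy as np

# Toy sketch of per-pulse reservoir read-out: reading the device state after
# every input pulse (instead of once at the end of the sequence) multiplies the
# feature dimensionality by the sequence length. The state update is an assumed
# stand-in for the ZnO/Ta2O5 transistor's dynamic memory, not a device model.

def reservoir_features(pulse_seq, leak=0.6, gain=2.0):
    """Return one feature per pulse: the state read after each write pulse."""
    state, feats = 0.0, []
    for p in pulse_seq:                          # p in {0, 1}: pulse or no pulse
        state = leak * state + np.tanh(gain * p) # nonlinear, history-dependent update
        feats.append(state)                      # non-destructive read (third terminal)
    return np.array(feats)

def train_readout(X, y, lam=1e-2):
    """Ridge-regression linear readout on the concatenated per-pulse features."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
```

    For digit classification, each image row would be encoded as a pulse sequence, the per-pulse features of all rows concatenated into X, and the labels one-hot encoded into y before training the linear readout.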

    NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking

    Full text link
    The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing: NeuroBench. NeuroBench aims to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.

    Bulbar Microcircuit Model Predicts Connectivity and Roles of Interneurons in Odor Coding

    No full text
    Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells) linearize the input-output transformation of the principal neurons (mitral cells), unlike previous models of contrast enhancement. The linearization is required to replicate the observed linear summation of mitral odor responses. Further, in our model, action potentials back-propagate along the lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells). Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral cells can sometimes be decorrelated, as observed, despite the cells receiving similar receptor input. We also rule out some alternative mechanisms. In our mechanism, we predict that a few distant mitral cells receiving input from different receptors inhibit sister mitral cells differentially, by activating disjoint subsets of granule cells. This differential inhibition is strong enough to decorrelate their firing-rate phases, and not merely modulate their spike timing. Thus our well-constrained model suggests novel computational roles for the two most numerous classes of interneurons in the bulb.
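    As a rough illustration of the decorrelation argument (not the compartmental model itself), the toy sketch below gives two "sister" cells identical respiratory-phase drive and subtracts differently phased inhibition from each; every waveform and constant is an assumption chosen only to show the effect.

```python
import numpy as np

# Toy illustration of phase decorrelation by differential inhibition (assumed
# waveforms, not the authors' biophysical model). Two sister mitral cells share
# the same receptor drive, but disjoint granule-cell subsets inhibit them with
# different phases, pulling their firing-rate phases apart.

t = np.linspace(0.0, 2.0 * np.pi, 500)               # one respiratory cycle
drive = 1.0 + np.sin(t)                               # shared receptor input

inh_a = 0.8 * np.clip(np.sin(t - 0.3), 0.0, None)     # inhibition onto sister A
inh_b = 0.8 * np.clip(np.sin(t + 2.0), 0.0, None)     # different granule subset onto sister B

rate_a = np.clip(drive - inh_a, 0.0, None)            # rectified firing rates
rate_b = np.clip(drive - inh_b, 0.0, None)

# With identical drive and no inhibition the correlation would be exactly 1.0;
# differential inhibition reduces it below 1, decorrelating the two rate traces.
print(np.corrcoef(rate_a, rate_b)[0, 1])
```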

    Model connectivity.

    No full text
    A. Synaptic schematic: Each glomerulus (dotted ellipse) receives input from olfactory receptor neurons (ORNs) expressing a single type of receptor out of many (different colors). Mitral/tufted (M/T) cells take excitatory input onto their dendritic tufts within one glomerulus, directly from ORNs (and via ET cells). ET cells have not been modeled (crossed out) and their input to M/T and PG cells is considered folded into the ORN input. Periglomerular (PG) cells are excited by ORNs (and via ET cells), and in turn inhibit M/T cells within the same glomerulus, thus causing feed-forward inhibition. PG cells also get excitation from M/T cells at reciprocal synapses, thus mediating recurrent inhibition. Further, M/T cells form reciprocal synapses with granule cells on their soma, primary and lateral dendrites, where they excite granule (G) cells which cause recurrent and lateral inhibition. Short-axon (SA) cells have not been modeled (crossed out). B. Visualization of the 'default' model having 3 glomeruli each with 2 sister mitral cells, and connecting interneurons. Singly connected granule cells are shown in purple. The jointly and multiply connected (shared) granule cells are shown in yellow and cyan respectively. PG cells are shown in orange. Synaptic connections are not shown, but granule and PG cells connect to nearby mitral dendrites, within their small dendritic extents.